UCDFormer: Unsupervised Change Detection Using a Transformer-driven Image Translation
Change detection (CD) by comparing two bi-temporal images is a crucial task
in remote sensing. Because it requires no cumbersome labeled change
information, unsupervised CD has attracted extensive attention in the
community. However, existing unsupervised CD approaches rarely consider the
seasonal and style differences caused by varying illumination and atmospheric
conditions in multi-temporal images. To this end, we introduce a domain-shift
setting for change detection in remote sensing images. Furthermore, we present
a novel unsupervised CD method built on a lightweight transformer, called
UCDFormer. Specifically, a transformer-driven image translation, composed of a
lightweight transformer and a domain-specific affinity weight, is first
proposed to mitigate the domain shift between the two images with real-time
efficiency.
After image translation, we generate the difference map between the
translated before-event image and the original after-event image. Then, a novel
reliable pixel extraction module is proposed to select significantly
changed/unchanged pixel positions by fusing the pseudo change maps produced by
fuzzy c-means clustering and adaptive thresholding. Finally, a binary change
map is obtained from these selected pixel pairs and a binary classifier.
Experimental results on different unsupervised CD tasks with seasonal and style
changes demonstrate the effectiveness of the proposed UCDFormer. For example,
compared with several related methods, UCDFormer improves the Kappa coefficient
by more than 12%. In addition, UCDFormer achieves excellent performance on
earthquake-induced landslide detection in large-scale applications. The code is
available at https://github.com/zhu-xlab/UCDFormer.
Comment: 16 pages, 7 figures, IEEE Transactions on Geoscience and Remote Sensing
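As a rough illustration of the reliable-pixel idea (a minimal sketch, not the
authors' UCDFormer implementation), the code below fuses two pseudo change maps
computed over a difference image: one from a two-cluster fuzzy c-means and one
from Otsu's adaptive threshold. Pixels on which both maps agree are kept as
reliable changed/unchanged samples; all names and thresholds are hypothetical.

```python
# Sketch: reliable-pixel selection by fusing fuzzy c-means and Otsu
# thresholding over a change-magnitude map (illustrative, not the
# authors' code).
import numpy as np
from skimage.filters import threshold_otsu

def fuzzy_cmeans_1d(x, m=2.0, iters=50):
    """Two-cluster fuzzy c-means on a 1-D array; returns each element's
    membership in the 'high' (changed) cluster."""
    c = np.array([x.min(), x.max()], dtype=float)        # cluster centers
    for _ in range(iters):
        d = np.abs(x[None, :] - c[:, None]) + 1e-12      # distances to centers
        u = d ** (-2.0 / (m - 1.0))                      # raw memberships
        u /= u.sum(axis=0, keepdims=True)                # normalize over clusters
        c = (u ** m) @ x / (u ** m).sum(axis=1)          # update centers
    return u[np.argmax(c)]                               # 'changed' memberships

def reliable_pixels(diff_map, tau=0.9):
    """Return boolean masks of reliably changed / unchanged pixels."""
    u_changed = fuzzy_cmeans_1d(diff_map.ravel().astype(float))
    u_changed = u_changed.reshape(diff_map.shape)
    otsu_changed = diff_map > threshold_otsu(diff_map)   # adaptive threshold map
    changed = (u_changed > tau) & otsu_changed           # both maps agree: changed
    unchanged = (u_changed < 1 - tau) & ~otsu_changed    # both agree: unchanged
    return changed, unchanged                            # the rest stays unlabeled
```

The agreeing pixel pairs would then serve as training samples for the final
binary classifier; disagreeing pixels are simply left out as unreliable.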
JEC-QA: A Legal-Domain Question Answering Dataset
We present JEC-QA, the largest question answering dataset in the legal
domain, collected from the National Judicial Examination of China. The
examination is a comprehensive evaluation of professional skills for legal
practitioners. College students are required to pass the examination to be
certified as a lawyer or a judge. The dataset is challenging for existing
question answering methods, because both retrieving relevant materials and
answering questions require logical reasoning. Owing to the multiple reasoning
abilities needed to answer legal questions, state-of-the-art models achieve
only about 28% accuracy on JEC-QA, while skilled and unskilled humans reach
81% and 64% accuracy, respectively, indicating a huge gap between humans and
machines on this task. We will release JEC-QA and our baselines to help improve
the reasoning ability of machine comprehension models. You can access the
dataset from http://jecqa.thunlp.org/.
Comment: 9 pages, 2 figures, 10 tables, accepted by AAAI 2020
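For context on the retrieval half of the task, a minimal retrieve-then-read
baseline for such an exam-style dataset might first pull relevant legal
materials with BM25; the sketch below (hypothetical field names, not the
released JEC-QA baselines, and using a toy whitespace tokenizer where real
Chinese text would need a proper segmenter) shows that retrieval step.

```python
# Hypothetical BM25 retrieval step for a retrieve-then-read QA baseline
# (illustrative only; not the JEC-QA reference implementation).
from rank_bm25 import BM25Okapi

# materials: legal reference passages (e.g., statutes, textbook excerpts)
materials = ["passage one ...", "passage two ..."]
tokenized = [doc.split() for doc in materials]   # toy whitespace tokenizer
bm25 = BM25Okapi(tokenized)

def retrieve(question, k=5):
    """Return the top-k passages most relevant to the exam question."""
    scores = bm25.get_scores(question.split())
    top = sorted(range(len(materials)), key=lambda i: -scores[i])[:k]
    return [materials[i] for i in top]
```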
MUSER: A Multi-View Similar Case Retrieval Dataset
Similar case retrieval (SCR) is a representative legal AI application that
plays a pivotal role in promoting judicial fairness. However, existing SCR
datasets focus only on the fact description section when judging the
similarity between cases, ignoring other valuable sections (e.g., the court's
opinion) that reveal the reasoning process behind the judgment. Furthermore,
case similarities are typically measured solely by the textual semantics of
the fact descriptions, which may fail to capture the full complexity of legal
cases from the perspective of legal knowledge. In this work, we present MUSER,
a similar case retrieval dataset based on multi-view similarity measurement
and comprehensive, sentence-level legal element annotations. Specifically, we
select three perspectives (legal fact, dispute focus, and law statutes) and
build a comprehensive, structured label schema of legal elements for each, to
enable accurate and knowledgeable evaluation of case similarities.
case similarities. The constructed dataset originates from Chinese civil cases
and contains 100 query cases and 4,024 candidate cases. We implement several
text classification algorithms for legal element prediction and various
retrieval methods for retrieving similar cases on MUSER. The experimental
results indicate that incorporating legal elements can benefit the performance
of SCR models, but further efforts are still required to address the remaining
challenges posed by MUSER. The source code and dataset are released at
https://github.com/THUlawtech/MUSER.
Comment: Accepted by CIKM 2023 Resource Track
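As a toy illustration of multi-view similarity scoring (a sketch under assumed
data fields, not the MUSER baselines), one could score each view separately
and average, as below with TF-IDF cosine similarity per view.

```python
# Toy multi-view case similarity: average per-view TF-IDF cosine scores
# (illustrative sketch; not the MUSER reference implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

VIEWS = ["legal_fact", "dispute_focus", "law_statutes"]  # hypothetical keys

def multi_view_similarity(query_case, candidate_cases):
    """Return one averaged similarity score per candidate case."""
    scores = None
    for view in VIEWS:
        texts = [query_case[view]] + [c[view] for c in candidate_cases]
        tfidf = TfidfVectorizer().fit_transform(texts)
        sim = cosine_similarity(tfidf[0], tfidf[1:])[0]  # query vs. candidates
        scores = sim if scores is None else scores + sim
    return scores / len(VIEWS)
```

Averaging treats the three views as equally important; a learned weighting per
view would be a natural refinement.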
Denoising Relation Extraction from Document-level Distant Supervision
Distant supervision (DS) has been widely used to generate auto-labeled data
for sentence-level relation extraction (RE), which improves RE performance.
However, the existing success of DS cannot be directly transferred to the more
challenging document-level relation extraction (DocRE), since the inherent
noise in DS may be amplified at the document level and significantly harm RE
performance. To address this challenge, we propose a novel pre-trained
model for DocRE, which denoises the document-level DS data via multiple
pre-training tasks. Experimental results on the large-scale DocRE benchmark
show that our model can capture useful information from noisy DS data and
achieve promising results.
Comment: EMNLP 2020 short paper
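The abstract does not spell out the pre-training tasks themselves. As a
generic stand-in, the sketch below shows one common DS-denoising heuristic,
confidence-based filtering, in which a model's own probabilities prune
likely-wrong distant labels (hypothetical interface, not the paper's method).

```python
# Generic confidence-based filtering of distantly supervised relation
# instances (a common denoising heuristic; NOT the paper's pre-training tasks).
import torch

def filter_ds_instances(model, instances, threshold=0.7):
    """Keep DS-labeled instances whose predicted probability for the
    distant label exceeds a confidence threshold."""
    kept = []
    model.eval()
    with torch.no_grad():
        for inst in instances:                    # inst: dict with encoded inputs
            logits = model(**inst["inputs"])      # hypothetical model interface
            probs = torch.softmax(logits, dim=-1)
            if probs[0, inst["ds_label"]] >= threshold:
                kept.append(inst)                 # confident: label likely correct
    return kept
```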
Emergent Modularity in Pre-trained Transformers
This work examines the presence of modularity in pre-trained Transformers, a
feature commonly found in human brains and thought to be vital for general
intelligence. In analogy to human brains, we consider two main characteristics
of modularity: (1) functional specialization of neurons: we evaluate whether
each neuron is mainly specialized in a certain function, and find that the
answer is yes. (2) function-based neuron grouping: we explore finding a
structure that groups neurons into modules by function, and each module works
for its corresponding function. Given the enormous amount of possible
structures, we focus on Mixture-of-Experts as a promising candidate, which
partitions neurons into experts and usually activates different experts for
different inputs. Experimental results show that there are functional experts,
in which neurons specialized in a certain function are clustered together. Moreover,
perturbing the activations of functional experts significantly affects the
corresponding function. Finally, we study how modularity emerges during
pre-training, and find that the modular structure is stabilized at the early
stage, which is faster than neuron stabilization. This suggests that
Transformers first construct the modular structure and then learn fine-grained
neuron functions. Our code and data are available at
https://github.com/THUNLP/modularity-analysis.
Comment: Findings of ACL 2023
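As a rough sketch of how per-neuron functional specialization might be probed
(hypothetical code, not the released analysis), one can score each neuron by
how strongly its activation separates inputs exercising one function from all
other inputs, e.g., with a per-neuron effect size.

```python
# Toy neuron-specialization probe: score each neuron by how strongly its
# activation separates inputs of one function from all others
# (illustrative sketch; not the paper's analysis code).
import numpy as np

def specialization_scores(acts, labels, function_id):
    """acts: (num_inputs, num_neurons) activations; labels: (num_inputs,)
    function id per input. Returns one effect-size score per neuron."""
    on = acts[labels == function_id]              # inputs exercising the function
    off = acts[labels != function_id]             # all other inputs
    pooled_std = np.sqrt((on.var(axis=0) + off.var(axis=0)) / 2) + 1e-8
    return (on.mean(axis=0) - off.mean(axis=0)) / pooled_std  # Cohen's d per neuron
```

Neurons with large scores for exactly one function would count as specialized;
grouping high-scoring neurons by function approximates the expert structure
the paper examines.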